98 research outputs found

    Low statistical power in biomedical science: a review of three human research domains

    Get PDF
    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
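    As a rough illustration of the power calculation described above (not the authors' own code), the sketch below estimates the power of a two-sample comparison, taking a meta-analytic effect size as the assumed true effect and a 5% significance threshold; the per-arm sample sizes and Cohen's d values are invented.

```python
# Illustrative sketch: estimating the statistical power of individual studies,
# taking the meta-analytic effect size as the best guess of the true effect.
# The (sample size, Cohen's d) pairs below are invented for illustration.
from statsmodels.stats.power import TTestIndPower

ALPHA = 0.05  # conventional significance threshold used in the review

# Hypothetical studies: (group size per arm, meta-analytic Cohen's d)
studies = [(20, 0.25), (35, 0.30), (120, 0.15), (60, 0.45)]

analysis = TTestIndPower()
for nobs1, d in studies:
    power = analysis.power(effect_size=d, nobs1=nobs1, alpha=ALPHA, ratio=1.0)
    flag = "below the 80% convention" if power < 0.80 else "adequate"
    print(f"n={nobs1:4d}, d={d:.2f} -> power={power:.2f} ({flag})")
```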

    A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    Get PDF
    Background: The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for simulating the dynamical behavior of the networks described by these models. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or to identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementation level. Results: We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NuSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties; the module also allows the display and visual inspection of the verification results. Conclusions: The practical use of the proposed web service is illustrated by a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of biological properties through a unified graphical user interface. This guarantees transparent access to formal verification technology for modelers of genetic regulatory networks.
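    To illustrate the kind of question a model checker answers for a genetic regulatory network (independently of the GNA/NuSMV/CADP service described above), the sketch below checks a simple reachability property, in the spirit of CTL's "EF", on the state graph of a toy two-gene Boolean network; the genes and update rules are invented.

```python
# Minimal illustration (not the GNA/NuSMV/CADP service itself): checking a
# reachability property on the state graph of a toy two-gene Boolean network.
from collections import deque

def successors(state):
    """Synchronous update of a toy network: gene a represses b, b activates a."""
    a, b = state
    return [(b, int(not a))]

def reachable(initial, prop):
    """Breadth-first search over the state graph: is a state satisfying `prop` reachable?"""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if prop(state):
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Property: "eventually gene b is ON while gene a is OFF" (an EF-style query).
print(reachable((1, 0), lambda s: s[0] == 0 and s[1] == 1))
```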

    Replication Validity of Initial Association Studies: A Comparison between Psychiatry, Neurology and Four Somatic Diseases

    Get PDF
    CONTEXT: There are growing concerns about effect size inflation and the replication validity of association studies, but few observational investigations have explored the extent of these problems. OBJECTIVE: To use meta-analyses to measure the reliability of initial studies and explore whether this varies across biomedical domains and study types (cognitive/behavioral, brain imaging, genetic and "others"). METHODS: We analyzed 663 meta-analyses describing associations between markers or risk factors and 12 pathologies within three biomedical domains (psychiatry, neurology and four somatic diseases). We collected the effect size, sample size, publication year and Impact Factor of initial studies, of the largest studies (i.e., those with the largest sample size) and of the corresponding meta-analyses. Initial studies were considered replicated if they were in nominal agreement with the meta-analyses and if their effect size inflation was below 100%. RESULTS: Nominal agreement between initial studies and meta-analyses regarding the presence of a significant effect was not better than chance in psychiatry, whereas it was somewhat better in neurology and somatic diseases. Whereas the effect sizes reported by the largest studies and the meta-analyses were similar, most of those reported by initial studies were inflated. Among the 256 initial studies reporting a significant effect (p<0.05) and paired with significant meta-analyses, 97 effect sizes were inflated by more than 100%. Nominal agreement and effect size inflation varied with the biomedical domain and study type. Indeed, the replication rate of initial studies reporting a significant effect ranged from 6.3% for genetic studies in psychiatry to 86.4% for cognitive/behavioral studies. Comparison between eight subgroups shows that the replication rate decreases with sample size and "true" effect size. We observed no evidence of an association between replication rate and publication year or Impact Factor. CONCLUSION: The differences in reliability between biological psychiatry, neurology and somatic diseases suggest that there is room for improvement, at least in some subdomains.
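    The sketch below is a minimal rendering of the replication criterion described above (nominal agreement plus effect size inflation below 100%); the effect sizes are invented, and the exact inflation metric used in the paper may differ (e.g., it may be computed on a log scale for odds ratios).

```python
# Sketch of the replication criterion described above, assuming effect sizes
# on a ratio scale (e.g., odds ratios > 1); numbers are invented.
def is_replicated(initial_es, initial_significant, meta_es, meta_significant):
    """An initial study counts as replicated if it agrees nominally with the
    meta-analysis and its effect size is inflated by less than 100%."""
    nominal_agreement = initial_significant == meta_significant
    inflation = (initial_es - meta_es) / meta_es  # relative inflation
    return nominal_agreement and inflation < 1.0  # below 100% inflation

# Example: initial OR = 2.4 (significant), meta-analytic OR = 1.3 (significant)
print(is_replicated(2.4, True, 1.3, True))  # inflation ~85% -> replicated
```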

    Bioinformatics Tools and Databases to Assess the Pathogenicity of Mitochondrial DNA Variants in the Field of Next Generation Sequencing

    Get PDF
    The development of next generation sequencing (NGS) has greatly enhanced the diagnosis of mitochondrial disorders, allowing systematic analysis of the whole mitochondrial DNA (mtDNA) sequence with better detection sensitivity. However, the exponential growth of sequencing data complicates the interpretation of the identified variants, thereby posing new challenges for the molecular diagnosis of mitochondrial diseases. Indeed, mtDNA sequencing by NGS requires specific bioinformatics tools and the adaptation of those developed for nuclear DNA, for the detection and quantification of mtDNA variants from sequence alignment to the calling steps, in order to manage the specific features of the mitochondrial genome, including heteroplasmy, i.e., the coexistence of mutant and wild-type mtDNA copies. The prioritization of mtDNA variants remains difficult, relying on a limited number of specific resources: population and clinical databases, and in silico tools providing a prediction of variant pathogenicity. An evaluation of the most prominent bioinformatics tools showed that their ability to predict pathogenicity was highly variable, indicating that special efforts should be directed at developing new tools dedicated to the mitochondrial genome. In addition, massively parallel sequencing raises several issues related to the interpretation of very low mtDNA mutational loads, the discovery of variants of unknown significance, mutations unrelated to the patient phenotype, and the co-occurrence of mtDNA variants. This review provides an overview of the current strategies and bioinformatics tools for the accurate annotation, prioritization and reporting of mtDNA variations from NGS data, in order to support accurate genetic counseling in individuals with primary mitochondrial diseases.
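    As an illustration of one quantity mentioned above, the heteroplasmy level of an mtDNA variant can be estimated from NGS data as the fraction of reads carrying the mutant allele at a position; the sketch below uses invented read counts and ignores alignment and variant-calling details.

```python
# Illustrative sketch of heteroplasmy quantification from NGS read counts:
# the heteroplasmy level of an mtDNA variant is the fraction of reads carrying
# the mutant allele at that position. Counts below are invented.
def heteroplasmy_level(mutant_reads, wildtype_reads):
    total = mutant_reads + wildtype_reads
    if total == 0:
        raise ValueError("no coverage at this position")
    return mutant_reads / total

# Example: 150 mutant reads vs 850 wild-type reads -> 15% heteroplasmy,
# a low mutational load whose clinical interpretation is often uncertain.
print(f"{heteroplasmy_level(150, 850):.0%}")
```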

    Mutations in the m-AAA proteases AFG3L2 and SPG7 are causing isolated dominant optic atrophy.

    Get PDF
    OBJECTIVE: To improve the genetic diagnosis of dominant optic atrophy (DOA), the most frequently inherited optic nerve disease, and infer genotype-phenotype correlations. METHODS: Exonic sequences of 22 genes were screened by new-generation sequencing in patients with DOA who underwent ophthalmologic, neurologic, and brain MRI investigations. RESULTS: We identified 7 and 8 new heterozygous pathogenic variants in SPG7 and AFG3L2, respectively. Both genes encode mitochondrial matricial AAA (m-AAA) proteases, initially implicated in recessive hereditary spastic paraplegia type 7 (HSP7) and dominant spinocerebellar ataxia 28 (SCA28), respectively. Notably, the variants in AFG3L2 that result in DOA are located in different domains from those reported in SCA28, which likely explains the lack of clinical overlap between these 2 phenotypic manifestations. In comparison, the SPG7 variants identified in DOA are interspersed among those responsible for HSP7, in which optic neuropathy has previously been reported. CONCLUSIONS: Our results position SPG7 and AFG3L2 as candidate genes to be screened in DOA and indicate that the regulation of mitochondrial protein homeostasis and maturation by m-AAA proteases is crucial for the maintenance of optic nerve physiology.

    Quantifying diet-induced metabolic changes of the human gut microbiome

    Get PDF
    The human gut microbiome is known to be associated with various human disorders, but a major challenge is to go beyond association studies and elucidate causality. Mathematical modeling of the human gut microbiome at genome scale is a useful tool to decipher microbe-microbe, diet-microbe and microbe-host interactions. Here, we describe the CASINO (Community and Systems-level Interactive Optimization) toolbox, a comprehensive computational platform for the analysis of microbial communities through metabolic modeling. We first validated the toolbox by simulating and testing the performance of single bacteria and whole communities in vitro. Focusing on metabolic interactions between the diet, the gut microbiota and host metabolism, we demonstrated the predictive power of the toolbox in a diet-intervention study of 45 obese and overweight individuals, and validated our predictions with fecal and blood metabolomics data. Thus, modeling could quantitatively describe altered fecal and serum amino acid levels in response to diet intervention.
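    As a generic illustration of genome-scale metabolic modeling (not the CASINO toolbox's own API), the sketch below solves a toy flux-balance problem: maximise a biomass-like flux subject to steady-state mass balance and a diet-limited uptake bound. The three-reaction network is invented.

```python
# Generic flux-balance-analysis sketch (not the CASINO toolbox itself):
# maximise a "biomass"-like flux under steady-state mass balance and a
# diet-limited uptake bound, using linear programming.
import numpy as np
from scipy.optimize import linprog

# Reactions: v1 = uptake (-> A), v2 = conversion (A -> B), v3 = biomass (B ->)
S = np.array([[1, -1,  0],   # metabolite A
              [0,  1, -1]])  # metabolite B

c = [0, 0, -1]                            # linprog minimises, so maximise v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # diet limits uptake to 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[2])  # bounded by the uptake limit (10)
```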

    Mass-spectrometry-based metabolomics: limitations and recommendations for future progress with particular focus on nutrition research

    Get PDF
    Mass spectrometry (MS) techniques, because of their sensitivity and selectivity, have become methods of choice to characterize the human metabolome, and MS-based metabolomics is increasingly used to characterize the complex metabolic effects of nutrients or foods. However, progress is still hampered by many unsolved problems, most notably the lack of well-established and standardized methods or procedures, and the difficulties still met in identifying the metabolites influenced by a given nutritional intervention. The purpose of this paper is to review the main obstacles limiting progress and to make recommendations to overcome them. Propositions are made to improve the mode of collection and preparation of biological samples, the coverage and quality of mass spectrometry analyses, the extraction and exploitation of the raw data, the identification of the metabolites, and the biological interpretation of the results.
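    One concrete bottleneck mentioned above is metabolite identification; a common first step is matching measured m/z values against reference monoisotopic masses within a ppm tolerance. The sketch below uses a two-entry reference list and a 10 ppm window purely for illustration.

```python
# Sketch of a common first step in metabolite identification: matching a
# measured m/z against reference monoisotopic masses within a ppm tolerance.
# The reference entries and the tolerance are illustrative only.
REFERENCE = {"glucose [M-H]-": 179.0561, "citrate [M-H]-": 191.0197}

def annotate(mz, tolerance_ppm=10.0):
    """Return reference entries whose m/z lies within the ppm window."""
    hits = []
    for name, ref_mz in REFERENCE.items():
        ppm_error = abs(mz - ref_mz) / ref_mz * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append((name, ppm_error))
    return hits

print(annotate(179.0560))  # ~0.6 ppm from the glucose entry
```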

    Biomedical research and journalism under uncertainty: validity of biomedical research results and media coverage

    No full text
    Many academic publications are devoted to the "reproducibility crisis" in biomedical sciences. Their authors distinguish this lack of reproducibility from fraud or plagiarism: the "crisis" refers to a much broader phenomenon, spanning many scientific disciplines, in which a large share of published results is disconfirmed by subsequent studies.
    This lack of reproducibility is to be expected: knowledge production is an incremental process in which early, promising yet tentative findings are validated through replication, so scientific results are inherently uncertain. The problem, however, is that this uncertainty does not seem to be taken into consideration when science "meets" the public, especially through the media.
    In this dissertation we studied how the media present this uncertainty when reporting biomedical findings. We first created a large, original database of scientific studies investigating the association between risk factors (genetic, biochemical, environmental) and pathologies from three biomedical domains: psychiatry, neurology and a set of four somatic diseases. We evaluated the validity of each initial study by comparing its results with those of meta-analyses on the same subject. Replication validity is low: 65% of initial studies are disconfirmed by the corresponding meta-analysis, even when they were published in high-ranking journals. We then identified which studies were selected by the press: initial studies published in prestigious journals and relevant to readers were preferentially covered. Their validity was nonetheless poor, with more than 50% being subsequently invalidated, and the press rarely mentioned these frequent invalidations. Analysing the content of the newspaper articles, we found that journalists and their editors rarely take scientific uncertainty into account: the majority of articles referred to the study as an initial study, but only 21% indicated that the results needed to be replicated. Moreover, those statements were mainly made by scientists and have become scarce in the most recent articles. A survey of 21 science journalists confirmed that journalists still consider high-ranking scientific journals to be reliable sources of information. However, these journalists were not familiar with the incremental process of knowledge production: two-thirds did not know that early findings are uncertain, or confused uncertainty with fraud. The remaining third knew about the uncertainty of initial results but found it hard to convey it in their articles because of their editorial hierarchy.
    More generally, the dissertation discusses the influence of extra-scientific factors on the production of scientific knowledge. We conclude that a research assessment process based on the number of papers published in high-impact-factor journals, combined with scientific institutions' orientation towards the media, might undermine the reliability of scientific results, in academic publications as well as in the media. In addition, journalists' working conditions are deteriorating and most do not seem to properly grasp how scientific facts are produced, which may damage public trust in biomedical research and the quality of public debate about health-related issues.

    The reproducibility crisis in biomedical research: an analysis of the validity of biomedical studies published in peer-reviewed journals and their media coverage

    No full text
    Many academic publications are devoted to the "reproducibility crisis" in biomedical sciences. Their authors distinguish this lack of reproducibility from fraud or plagiarism: the "crisis" refers to a much broader phenomenon, spanning many scientific disciplines, in which a large share of published results is disconfirmed by subsequent studies.
    This lack of reproducibility is to be expected: knowledge production is an incremental process in which early, promising yet tentative findings are validated through replication, so scientific results are inherently uncertain. The problem, however, is that this uncertainty does not seem to be taken into consideration when science "meets" the public, especially through the media.
    In this dissertation we studied how the media present this uncertainty when reporting biomedical findings. We first created a large, original database of scientific studies investigating the association between risk factors (genetic, biochemical, environmental) and pathologies from three biomedical domains: psychiatry, neurology and a set of four somatic diseases. We evaluated the validity of each initial study by comparing its results with those of meta-analyses on the same subject. Replication validity is low: 65% of initial studies are disconfirmed by the corresponding meta-analysis, even when they were published in high-ranking journals. We then identified which studies were selected by the press: initial studies published in prestigious journals and relevant to readers were preferentially covered. Their validity was nonetheless poor, with more than 50% being subsequently invalidated, and the press rarely mentioned these frequent invalidations. Analysing the content of the newspaper articles, we found that journalists and their editors rarely take scientific uncertainty into account: the majority of articles referred to the study as an initial study, but only 21% indicated that the results needed to be replicated. Moreover, those statements were mainly made by scientists and have become scarce in the most recent articles. A survey of 21 science journalists confirmed that journalists still consider high-ranking scientific journals to be reliable sources of information. However, these journalists were not familiar with the incremental process of knowledge production: two-thirds did not know that early findings are uncertain, or confused uncertainty with fraud. The remaining third knew about the uncertainty of initial results but found it hard to convey it in their articles because of their editorial hierarchy.
    More generally, the dissertation discusses the influence of extra-scientific factors on the production of scientific knowledge. We conclude that a research assessment process based on the number of papers published in high-impact-factor journals, combined with scientific institutions' orientation towards the media, might undermine the reliability of scientific results, in academic publications as well as in the media. In addition, journalists' working conditions are deteriorating and most do not seem to properly grasp how scientific facts are produced, which may damage public trust in biomedical research and the quality of public debate about health-related issues.